Results 1 - 11 of 11
1.
Diagn Progn Res ; 7(1): 8, 2023 Apr 04.
Article in English | MEDLINE | ID: covidwho-2288226

ABSTRACT

BACKGROUND: The COVID-19 pandemic has had a large impact worldwide and is known to particularly affect the older population. This paper outlines the protocol for external validation of prognostic models predicting mortality risk after presentation with COVID-19 in the older population. These prognostic models were originally developed in an adult population and will be validated in an older population (≥ 70 years of age) in three healthcare settings: the hospital setting, the primary care setting, and the nursing home setting. METHODS: Based on a living systematic review of COVID-19 prediction models, we identified eight prognostic models predicting the risk of mortality in adults with a COVID-19 infection (five COVID-19-specific models: GAL-COVID-19 mortality, 4C Mortality Score, NEWS2 + model, Xie model, and Wang clinical model; and three pre-existing prognostic scores: APACHE-II, CURB65, and SOFA). These eight models will be validated in six different cohorts of the Dutch older population (three hospital cohorts, two primary care cohorts, and a nursing home cohort). All prognostic models will be validated in the hospital setting, while the GAL-COVID-19 mortality model will be validated in the hospital, primary care, and nursing home settings. The study will include individuals ≥ 70 years of age with a highly suspected or PCR-confirmed COVID-19 infection from March 2020 to December 2020 (and up to December 2021 in a sensitivity analysis). Predictive performance will be evaluated in terms of discrimination, calibration, and decision curves for each of the prognostic models in each cohort individually. For prognostic models with indications of miscalibration, an intercept update will be performed, after which predictive performance will be re-evaluated. DISCUSSION: Insight into the performance of existing prognostic models in one of the most vulnerable populations clarifies the extent to which tailoring of COVID-19 prognostic models is needed when models are applied to the older population. Such insight will be important for possible future waves of the COVID-19 pandemic or future pandemics.
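
The protocol above states that miscalibrated models will receive an intercept update before performance is re-evaluated. As a rough illustration of what such an update involves, the sketch below (Python; the variable names and the bisection approach are illustrative assumptions, not the protocol's specified method) shifts each prediction on the logit scale until the mean updated risk equals the observed event rate, i.e. until calibration-in-the-large is restored.

import math

def expit(x):
    return 1.0 / (1.0 + math.exp(-x))

def logit(p):
    return math.log(p / (1.0 - p))

def intercept_update(pred_probs, outcomes, tol=1e-8):
    # Find the additive shift 'delta' on the logit scale such that the mean
    # updated prediction equals the observed event rate.
    observed_rate = sum(outcomes) / len(outcomes)
    lo, hi = -5.0, 5.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        mean_pred = sum(expit(logit(p) + mid) for p in pred_probs) / len(pred_probs)
        if mean_pred < observed_rate:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Invented predictions and outcomes, purely for illustration.
preds = [0.40, 0.25, 0.60, 0.10, 0.55]
obs = [0, 0, 1, 0, 1]
delta = intercept_update(preds, obs)
updated = [expit(logit(p) + delta) for p in preds]

Note that a uniform logit shift leaves discrimination (the C statistic) unchanged; it only corrects the overall level of predicted risk.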

2.
Clin Microbiol Infect ; 2022 Nov 13.
Article in English | MEDLINE | ID: covidwho-2285403

ABSTRACT

OBJECTIVES: To assess the performance of three antigen rapid diagnostic tests commonly used as self-tests in asymptomatic individuals in the Omicron period. METHODS: We performed a cross-sectional diagnostic test accuracy study in the Omicron period at three public health service COVID-19 test sites in the Netherlands, including 3600 asymptomatic individuals aged ≥ 16 years presenting for SARS-CoV-2 testing for any reason except confirmatory testing after a positive self-test. Participants were sampled for RT-PCR (reference test) and received one self-test (either Acon Flowflex [Flowflex], MP Biomedicals [MPBio], or Siemens-Healthineers CLINITEST [CLINITEST]) to perform unsupervised at home. Diagnostic accuracies of each self-test were calculated. RESULTS: Overall sensitivities were 27.5% (95% CI, 21.3-34.3%) for Flowflex, 20.9% (13.9-29.4%) for MPBio, and 25.6% (19.1-33.1%) for CLINITEST. After applying a viral load cut-off (≥5.2 log10 SARS-CoV-2 E-gene copies/mL), sensitivities increased to 48.3% (37.6-59.2%), 37.8% (22.5-55.2%), and 40.0% (29.5-51.2%), respectively. Specificities were >99% for all tests in most analyses. DISCUSSION: The sensitivities of three commonly used SARS-CoV-2 antigen rapid diagnostic tests when used as self-tests in asymptomatic individuals in the Omicron period were very low. Antigen rapid diagnostic test self-testing in asymptomatic individuals may detect only a minority of infections present at that point in time. Repeated self-testing in case of a negative self-test is advocated to improve the diagnostic yield, and individuals should be advised to re-test when symptoms develop.
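
The sensitivities, specificities, and 95% confidence intervals above come from comparing each self-test against RT-PCR in a 2x2 table. A minimal sketch of that calculation is given below (Python); the counts are invented for illustration, and the Wilson score interval is only one of several CI methods the authors may have used.

import math

def wilson_ci(successes, n, z=1.96):
    # Wilson score 95% confidence interval for a proportion.
    if n == 0:
        return (0.0, 0.0)
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return (centre - half, centre + half)

# Illustrative 2x2 counts (not the study's data).
tp, fn = 33, 87      # self-test positive / negative among RT-PCR positives
tn, fp = 1075, 5     # self-test negative / positive among RT-PCR negatives
sens = tp / (tp + fn)
spec = tn / (tn + fp)
print(f"sensitivity {sens:.1%}, 95% CI {wilson_ci(tp, tp + fn)}")
print(f"specificity {spec:.1%}, 95% CI {wilson_ci(tn, tn + fp)}")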

3.
Clin Microbiol Infect ; 2022 Aug 04.
Article in English | MEDLINE | ID: covidwho-2256174

ABSTRACT

BACKGROUND: Prognostic models are typically developed to estimate the risk that an individual in a particular health state will develop a particular health outcome, to support (shared) decision making. Systematic reviews of prognostic model studies can help identify prognostic models that need to be further validated or are ready to be implemented in healthcare. OBJECTIVES: To provide step-by-step guidance on how to conduct and read a systematic review of prognostic model studies and to provide an overview of the methodology and guidance available for every step of the review process. SOURCES: Published, peer-reviewed guidance articles. CONTENT: We describe the following steps for conducting a systematic review of prognosis studies: 1) Developing the review question using the Population, Index model, Comparator model, Outcome(s), Timing, Setting format; 2) Searching for and selecting articles; 3) Data extraction using the Critical Appraisal and Data Extraction for Systematic Reviews of Prediction Modelling Studies (CHARMS) checklist; 4) Quality and risk of bias assessment using the Prediction model Risk Of Bias ASsessment (PROBAST) tool; 5) Analysing data and undertaking quantitative meta-analysis; and 6) Presenting a summary of findings, interpreting results, and drawing conclusions. Guidance for each step is described and illustrated using a case study on prognostic models for patients with COVID-19. IMPLICATIONS: Guidance for conducting a systematic review of prognosis studies is available, but the implications of these reviews for clinical practice and further research depend strongly on complete reporting of the primary studies.

4.
J Clin Epidemiol ; 154: 75-84, 2023 02.
Article in English | MEDLINE | ID: covidwho-2241601

ABSTRACT

OBJECTIVES: To assess improvement in the completeness of reporting of coronavirus disease 2019 (COVID-19) prediction models after the peer review process. STUDY DESIGN AND SETTING: Studies included in a living systematic review of COVID-19 prediction models, with both preprint and peer-reviewed published versions available, were assessed. The primary outcome was the change in percentage adherence to the transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD) reporting guideline between the preprint and published manuscripts. RESULTS: Nineteen studies were identified, including seven (37%) model development studies, two (11%) external validations of existing models, and 10 (53%) papers reporting on both development and external validation of the same model. Median percentage adherence among preprint versions was 33% (min-max: 10 to 68%). Percentage adherence to TRIPOD components increased from preprint to publication in 11/19 studies (58%), with adherence unchanged in the remaining eight studies. The median change in adherence was just 3 percentage points (pp; min-max: 0-14 pp) across all studies. No association was observed between the change in percentage adherence and preprint score, journal impact factor, or time between journal submission and acceptance. CONCLUSIONS: The preprint reporting quality of COVID-19 prediction modeling studies is poor and did not improve much after peer review, suggesting that peer review had a trivial effect on the completeness of reporting during the pandemic.


Subject(s)
COVID-19 , Humans , COVID-19/epidemiology , Prognosis , Pandemics
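
TRIPOD adherence in the study above is expressed as the percentage of applicable checklist items that a manuscript reports, and the outcome is the change in that percentage between the preprint and published versions. A small worked example follows (Python); the item counts are hypothetical and chosen only to be of the same order as the reported medians.

def adherence_pct(items_reported, items_applicable):
    # Percentage adherence to TRIPOD: reported items / applicable items.
    return 100.0 * items_reported / items_applicable

# Hypothetical study: 12 of 36 applicable TRIPOD items reported in the
# preprint and 13 of 36 in the published version.
preprint = adherence_pct(12, 36)      # about 33%
published = adherence_pct(13, 36)     # about 36%
change_pp = published - preprint      # roughly a 3 percentage-point gain
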
5.
BMC Med ; 20(1): 406, 2022 10 24.
Article in English | MEDLINE | ID: covidwho-2089197

ABSTRACT

BACKGROUND: The diagnostic accuracy of unsupervised self-testing with rapid antigen diagnostic tests (Ag-RDTs) is mostly unknown. We studied the diagnostic accuracy of a self-performed SARS-CoV-2 saliva and nasal Ag-RDT in the general population. METHODS: This large cross-sectional study consecutively included unselected individuals aged ≥ 16 years presenting for SARS-CoV-2 testing at three public health service test sites. Participants underwent molecular test sampling and received two self-tests (the Hangzhou AllTest Biotech saliva self-test and the SD Biosensor nasal self-test by Roche Diagnostics) to perform themselves at home. Diagnostic accuracy of both self-tests was assessed with molecular testing as the reference. RESULTS: Out of 2819 participants, 6.5% had a positive molecular test. Overall sensitivities were 46.7% (39.3-54.2%) for the saliva Ag-RDT and 68.9% (61.6-75.6%) for the nasal Ag-RDT. With a viral load cut-off (≥ 5.2 log10 SARS-CoV-2 E-gene copies/mL) as a proxy of infectiousness, these sensitivities increased to 54.9% (46.4-63.3%) and 83.9% (76.9-89.5%), respectively. For the nasal Ag-RDT, sensitivities were 78.5% (71.1-84.8%) and 22.6% (9.6-41.1%) in those symptomatic and asymptomatic at the time of sampling, which increased to 90.4% (83.8-94.9%) and 38.9% (17.3-64.3%) after applying the viral load cut-off. In those with and without a prior SARS-CoV-2 infection, sensitivities were 36.8% (16.3-61.6%) and 72.7% (65.1-79.4%). Specificities were > 99% and > 99%, positive predictive values > 70% and > 90%, and negative predictive values > 95% and > 95%, for the saliva and nasal Ag-RDT, respectively, in most analyses. Most participants considered the self-test procedure and result interpretation (very) easy for both self-tests. CONCLUSIONS: The Hangzhou AllTest Biotech saliva self Ag-RDT is not reliable for SARS-CoV-2 detection, overall and in all studied subgroups. The SD Biosensor nasal self Ag-RDT had high sensitivity in individuals with symptoms and in those without a prior SARS-CoV-2 infection, but low sensitivity in asymptomatic individuals and those with a prior SARS-CoV-2 infection, which warrants further investigation.


Subject(s)
COVID-19 , SARS-CoV-2 , Humans , COVID-19/diagnosis , Cross-Sectional Studies , COVID-19 Testing , Saliva , Sensitivity and Specificity , Antigens, Viral
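
Both this study and the preceding one recompute sensitivity after restricting RT-PCR-positive participants to those above a viral load cut-off (≥ 5.2 log10 E-gene copies/mL) as a proxy for infectiousness. A minimal sketch of that subgroup calculation is given below (Python); the data points and field layout are invented assumptions, not the study's data.

def sensitivity(results):
    # results: list of (self_test_positive, viral_load_log10) tuples for
    # RT-PCR-positive participants.
    detected = sum(1 for ag_pos, _ in results if ag_pos)
    return detected / len(results) if results else float("nan")

def sensitivity_above_cutoff(results, cutoff=5.2):
    # Sensitivity restricted to participants at or above the viral load cut-off.
    subset = [r for r in results if r[1] >= cutoff]
    return sensitivity(subset)

# Hypothetical RT-PCR-positive participants: (self-test result, log10 E-gene copies/mL).
data = [(True, 6.8), (False, 4.1), (True, 5.9), (False, 5.5), (False, 3.2), (True, 7.3)]
overall = sensitivity(data)                        # 3/6 = 50%
infectious_only = sensitivity_above_cutoff(data)   # 3/4 = 75%
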
6.
BMJ ; 378: e071215, 2022 09 14.
Article in English | MEDLINE | ID: covidwho-2029495

ABSTRACT

OBJECTIVE: To assess the performance of rapid antigen tests with unsupervised nasal and combined oropharyngeal and nasal self-sampling during the omicron period. DESIGN: Prospective cross sectional diagnostic test accuracy study. SETTING: Three public health service covid-19 test sites in the Netherlands, 21 December 2021 to 10 February 2022. PARTICIPANTS: 6497 people with covid-19 symptoms aged ≥16 years presenting for testing. INTERVENTIONS: Participants had a swab sample taken for reverse transcription polymerase chain reaction (RT-PCR, reference test) and received one rapid antigen test to perform unsupervised, using either nasal self-sampling (phase 1: during the emergence of omicron and when omicron accounted for >90% of infections) or combined oropharyngeal and nasal self-sampling in a subsequent phase (phase 2: when omicron accounted for >99% of infections). The evaluated tests were Flowflex (Acon Laboratories; phase 1 only), MPBio (MP Biomedicals), and Clinitest (Siemens-Healthineers). MAIN OUTCOME MEASURES: The main outcomes were sensitivity, specificity, and positive and negative predictive values of each self-test, with RT-PCR testing as the reference standard. RESULTS: During phase 1, 45.0% (n=279) of participants in the Flowflex group, 29.1% (n=239) in the MPBio group, and 35.4% (n=257) in the Clinitest group were confirmatory testers (having previously tested positive by a self-test taken at their own initiative). Overall sensitivities with nasal self-sampling were 79.0% (95% confidence interval 74.7% to 82.8%) for Flowflex, 69.9% (65.1% to 74.4%) for MPBio, and 70.2% (65.6% to 74.5%) for Clinitest. Sensitivities were substantially higher in confirmatory testers (93.6%, 83.6%, and 85.7%, respectively) than in those who tested for other reasons (52.4%, 51.5%, and 49.5%, respectively). Sensitivities decreased from 87.0% to 80.9% (P=0.16 by χ2 test), 80.0% to 73.0% (P=0.60), and 83.1% to 70.3% (P=0.03), respectively, when transitioning from omicron accounting for 29% of infections to >95% of infections. During phase 2, 53.0% (n=288) of participants in the MPBio group and 44.4% (n=290) in the Clinitest group were confirmatory testers. Overall sensitivities with combined oropharyngeal and nasal self-sampling were 83.0% (78.8% to 86.7%) for MPBio and 77.3% (72.9% to 81.2%) for Clinitest. When combined oropharyngeal and nasal self-sampling was compared with nasal self-sampling, sensitivities were slightly higher in confirmatory testers (87.4% and 86.1%, respectively) and substantially higher in those testing for other reasons (69.3% and 59.9%, respectively). CONCLUSIONS: Sensitivities of the three rapid antigen tests with nasal self-sampling decreased during the emergence of omicron, but the decrease was statistically significant only for Clinitest. Sensitivities appeared to be substantially influenced by the proportion of confirmatory testers. Sensitivities of MPBio and Clinitest improved after the addition of oropharyngeal to nasal self-sampling. A positive self-test result justifies prompt self-isolation without the need for confirmatory testing. Individuals with a negative self-test result should adhere to general preventive measures because a false negative result cannot be ruled out. Manufacturers of MPBio and Clinitest may consider extending their instructions for use to include combined oropharyngeal and nasal self-sampling, and other manufacturers of rapid antigen tests should consider evaluating this as well.


Subject(s)
COVID-19 , COVID-19/diagnosis , COVID-19 Testing , Citric Acid , Copper Sulfate , Cross-Sectional Studies , Humans , Prospective Studies , Sodium Bicarbonate , Specimen Handling , United States
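
The phase 1 results above compare sensitivities at two points in the omicron wave using a χ2 test. The sketch below (Python) shows a Pearson χ2 test without continuity correction on a 2x2 table of detected versus missed infections; the counts are hypothetical, chosen only to roughly match the reported Clinitest sensitivities, and are not taken from the paper.

import math

def chi2_2x2(a, b, c, d):
    # Pearson chi-square (1 df, no continuity correction) for the 2x2 table
    # [[a, b], [c, d]]; returns (statistic, p_value).
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(stat / 2.0))  # upper tail of chi-square with 1 df
    return stat, p

# Hypothetical counts: detected / missed infections before vs after omicron
# reached >95% of infections (roughly 83% vs 70% sensitivity).
stat, p = chi2_2x2(54, 11, 116, 49)
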
7.
BMJ ; 378: e069881, 2022 07 12.
Article in English | MEDLINE | ID: covidwho-1932661

ABSTRACT

OBJECTIVE: To externally validate various prognostic models and scoring rules for predicting short term mortality in patients admitted to hospital for covid-19. DESIGN: Two stage individual participant data meta-analysis. SETTING: Secondary and tertiary care. PARTICIPANTS: 46 914 patients across 18 countries, admitted to a hospital with polymerase chain reaction confirmed covid-19 from November 2019 to April 2021. DATA SOURCES: Multiple (clustered) cohorts in Brazil, Belgium, China, Czech Republic, Egypt, France, Iran, Israel, Italy, Mexico, Netherlands, Portugal, Russia, Saudi Arabia, Spain, Sweden, United Kingdom, and United States previously identified by a living systematic review of covid-19 prediction models published in The BMJ, and through PROSPERO, reference checking, and expert knowledge. MODEL SELECTION AND ELIGIBILITY CRITERIA: Prognostic models identified by the living systematic review and through contacting experts. Models were excluded a priori if they had a high risk of bias in the participant domain of PROBAST (prediction model risk of bias assessment tool) or if their applicability was deemed poor. METHODS: Eight prognostic models with diverse predictors were identified and validated. A two stage individual participant data meta-analysis was performed of the estimated model concordance (C) statistic, calibration slope, calibration-in-the-large, and observed to expected (O:E) ratio across the included clusters. MAIN OUTCOME MEASURES: 30 day mortality or in-hospital mortality. RESULTS: Datasets included 27 clusters from 18 different countries and contained data on 46 914 patients. The pooled estimates ranged from 0.67 to 0.80 (C statistic), 0.22 to 1.22 (calibration slope), and 0.18 to 2.59 (O:E ratio) and were prone to substantial between study heterogeneity. The 4C Mortality Score by Knight et al (pooled C statistic 0.80, 95% confidence interval 0.75 to 0.84, 95% prediction interval 0.72 to 0.86) and the clinical model by Wang et al (0.77, 0.73 to 0.80, 0.63 to 0.87) had the highest discriminative ability. On average, 29% fewer deaths were observed than predicted by the 4C Mortality Score (pooled O:E 0.71, 95% confidence interval 0.45 to 1.11, 95% prediction interval 0.21 to 2.39), 35% fewer than predicted by the Wang clinical model (0.65, 0.52 to 0.82, 0.23 to 1.89), and 4% fewer than predicted by Xie et al's model (0.96, 0.59 to 1.55, 0.21 to 4.28). CONCLUSION: The prognostic value of the included models varied greatly between the data sources. Although the Knight 4C Mortality Score and the Wang clinical model appeared the most promising, recalibration (intercept and slope updates) is needed before implementation in routine care.


Subject(s)
COVID-19 , Models, Statistical , Data Analysis , Hospital Mortality , Humans , Prognosis
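
In a two stage individual participant data meta-analysis such as the one above, each performance measure (for example the log O:E ratio or logit C statistic) is first estimated per cluster and then pooled with a random-effects model. The sketch below (Python) shows DerSimonian-Laird pooling on invented stage-one estimates; the actual analysis likely used more elaborate methods (for example restricted maximum likelihood with prediction intervals), so treat this purely as a schematic.

import math

def dersimonian_laird(estimates, variances):
    # Random-effects pooling (DerSimonian-Laird) of per-cluster estimates.
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    df = len(estimates) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                       # between-cluster variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, estimates)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2

# Hypothetical stage-one results: log(O:E) and its variance in each cluster.
log_oe = [math.log(x) for x in (0.55, 0.90, 0.70, 1.20, 0.60)]
var_log_oe = [0.02, 0.05, 0.03, 0.08, 0.04]
pooled, se, tau2 = dersimonian_laird(log_oe, var_log_oe)
pooled_oe = math.exp(pooled)   # back-transform to the O:E scale
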
8.
BMJ ; 374: n1676, 2021 Jul 27.
Article in English | MEDLINE | ID: covidwho-1329048

ABSTRACT

OBJECTIVE: To assess the diagnostic test accuracy of two rapid antigen tests in asymptomatic and presymptomatic close contacts of people with SARS-CoV-2 infection on day 5 after exposure. DESIGN: Prospective cross sectional study. SETTING: Four public health service covid-19 test sites in the Netherlands. PARTICIPANTS: 4274 consecutively included close contacts (identified through the test-and-trace programme or contact tracing app) aged 16 years or older and asymptomatic for covid-19 when requesting a test. MAIN OUTCOME MEASURES: Sensitivity, specificity, and positive and negative predictive values of the Veritor System (Becton Dickinson) and Biosensor (Roche Diagnostics) rapid antigen tests, with reverse-transcriptase polymerase chain reaction (RT-PCR) testing as the reference standard. The viral load cut-off above which 95% of people with a positive RT-PCR test result were virus culture positive was used as a proxy of infectiousness. RESULTS: Of 2678 participants tested with Veritor, 233 (8.7%) had an RT-PCR confirmed SARS-CoV-2 infection, of whom 149 were also detected by the rapid antigen test (sensitivity 63.9%, 95% confidence interval 57.4% to 70.1%). Of 1596 participants tested with Biosensor, 132 (8.3%) had an RT-PCR confirmed SARS-CoV-2 infection, of whom 83 were detected by the rapid antigen test (sensitivity 62.9%, 54.0% to 71.1%). In those who were still asymptomatic at the time of sampling, sensitivity was 58.7% (51.1% to 66.0%) for Veritor (n=2317) and 59.4% (49.2% to 69.1%) for Biosensor (n=1414), and in those who developed symptoms, sensitivity was 84.2% (68.7% to 94.0%; n=219) for Veritor and 73.3% (54.1% to 87.7%; n=158) for Biosensor. When a viral load cut-off was applied for infectiousness (≥5.2 log10 SARS-CoV-2 E gene copies/mL), the overall sensitivity was 90.1% (84.2% to 94.4%) for Veritor and 86.8% (78.1% to 93.0%) for Biosensor, and 88.1% (80.5% to 93.5%) for Veritor and 85.1% (74.3% to 92.6%) for Biosensor among those who remained asymptomatic throughout. Specificities were >99%, and positive and negative predictive values were >90% and >95%, for both rapid antigen tests in all analyses. CONCLUSIONS: The sensitivities of both rapid antigen tests in asymptomatic and presymptomatic close contacts tested on day 5 onwards after close contact with an index case were more than 60%, increasing to more than 85% after a viral load cut-off was applied as a proxy for infectiousness.
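
The predictive values reported above depend on the prevalence in the tested population (about 8.7% and 8.3% here) as well as on sensitivity and specificity. The sketch below (Python) applies Bayes' theorem with figures of the same order as those reported for Veritor; it is an illustration under those assumed inputs, not a re-analysis of the study data.

def predictive_values(sens, spec, prevalence):
    # Positive and negative predictive values from sensitivity, specificity,
    # and prevalence (Bayes' theorem).
    ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

# Assumed inputs of the same order as the Veritor results:
# sensitivity ~64%, specificity ~99.5%, prevalence ~8.7%.
ppv, npv = predictive_values(0.639, 0.995, 0.087)   # roughly 0.92 and 0.97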

9.
BMJ Open ; 11(7): e050519, 2021 07 12.
Article in English | MEDLINE | ID: covidwho-1307918

ABSTRACT

OBJECTIVE: To systematically review evidence on the effectiveness of contact tracing apps (CTAs) for SARS-CoV-2 on epidemiological and clinical outcomes. DESIGN: Rapid systematic review. DATA SOURCES: EMBASE (OVID), MEDLINE (PubMed), BioRxiv and MedRxiv were searched up to 28 October 2020. STUDY SELECTION: Studies, both empirical and model-based, assessing the effect of CTAs for SARS-CoV-2 on the reproduction number (R), total number of infections, hospitalisation rate, mortality rate, and other epidemiologically and clinically relevant outcomes were eligible for inclusion. DATA EXTRACTION: Empirical and model-based studies were critically appraised using separate checklists. Data on type of study (ie, empirical or model-based), sample size, (simulated) time horizon, study population, CTA type (and associated interventions), comparator and outcomes assessed were extracted. The most important findings were extracted and narratively summarised. Specifically for model-based studies, the characteristics and values of important model parameters were collected. RESULTS: 2140 studies were identified, of which 17 (2 empirical, 15 model-based) were eligible and included in this review. Both empirical studies were observational (non-randomised) studies at high risk of bias, most importantly due to risk of confounding. Risk of bias of the model-based studies was considered low for 12 of the 15. Most studies demonstrated beneficial effects of CTAs on R, total number of infections and mortality rate. No studies assessed the effect on hospitalisation. Effect size depended on the model parameter values used, but in general a beneficial effect was observed at CTA adoption rates of 20% or higher. CONCLUSIONS: CTAs have the potential to be effective in reducing SARS-CoV-2 related epidemiological and clinical outcomes, though the effect size depends on other model parameters (eg, the proportion of asymptomatic individuals or testing delays) and on the interventions taken after CTA notification. Methodologically sound comparative empirical studies on the effectiveness of CTAs are required to confirm the findings from model-based studies.


Subject(s)
COVID-19 , Contact Tracing , SARS-CoV-2 , Bias , Humans
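
The review above finds that a benefit generally appears at CTA adoption rates of 20% or higher, with the size of the effect driven by other parameters. The toy calculation below (Python) is not any of the reviewed models; it only illustrates why adoption enters roughly quadratically in many of them, since both the index case and the contact must run the app for a notification to occur. All parameter values are made up.

def effective_r(r0, adoption, detection_prob, quarantine_effect):
    # Toy approximation: a transmission chain is interrupted only when both
    # the index case and the contact run the app (probability adoption**2),
    # the index case is detected, and quarantine prevents onward transmission.
    return r0 * (1 - adoption ** 2 * detection_prob * quarantine_effect)

# Illustrative values only; each reviewed model-based study has its own
# structure and parameters.
for adoption in (0.1, 0.2, 0.4, 0.6):
    print(adoption, round(effective_r(1.3, adoption, 0.6, 0.9), 2))
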
10.
Eur Respir J ; 59(2)2022 02.
Article in English | MEDLINE | ID: covidwho-1282234

ABSTRACT

INTRODUCTION: The individual prognostic factors for coronavirus disease 2019 (COVID-19) are unclear. For this reason, we aimed to present a state-of-the-art systematic review and meta-analysis on the prognostic factors for adverse outcomes in COVID-19 patients. METHODS: We systematically reviewed PubMed from 1 January 2020 to 26 July 2020 to identify non-overlapping studies examining the association of any prognostic factor with any adverse outcome in patients with COVID-19. Random-effects meta-analysis was performed, and between-study heterogeneity was quantified using the I2 statistic. The presence of small-study effects was assessed by applying Egger's regression test. RESULTS: We identified 428 eligible articles, which were used in a total of 263 meta-analyses examining the association of 91 unique prognostic factors with 11 outcomes. Angiotensin-converting enzyme inhibitors, obstructive sleep apnoea, pharyngalgia, history of venous thromboembolism, sex, coronary heart disease, cancer, chronic liver disease, COPD, dementia, any immunosuppressive medication, peripheral arterial disease, rheumatological disease and smoking were associated with at least one outcome and had >1000 events, p<0.005, I2<50%, a 95% prediction interval excluding the null value, and absence of small-study effects in the respective meta-analysis. The risk of bias assessment using the Quality in Prognosis Studies tool indicated high risk of bias in 302 of 428 articles for study participation, in 389 articles for adjustment for other prognostic factors, and in 396 articles for statistical analysis and reporting. CONCLUSIONS: Our findings could be used for prognostic model building and to guide patient selection for randomised clinical trials.


Subject(s)
COVID-19 , Bias , Humans , Prognosis , SARS-CoV-2
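
Two of the quantities named in the methods above, the I2 heterogeneity statistic and Egger's regression test for small-study effects, can be computed from per-study effect estimates and their standard errors. The sketch below (Python, with invented data) shows both; a full analysis would also report the corresponding p values and the random-effects pooled estimates.

def i_squared(estimates, variances):
    # Heterogeneity I2 (%) derived from Cochran's Q under a fixed-effect model.
    w = [1.0 / v for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - pooled) ** 2 for wi, yi in zip(w, estimates))
    df = len(estimates) - 1
    return max(0.0, 100.0 * (q - df) / q) if q > 0 else 0.0

def egger_intercept(estimates, ses):
    # Egger's regression: ordinary least squares of (estimate / se) on (1 / se);
    # the intercept indicates small-study effects (funnel plot asymmetry).
    y = [e / s for e, s in zip(estimates, ses)]
    x = [1.0 / s for s in ses]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return my - slope * mx

# Hypothetical per-study log odds ratios and standard errors.
effects = [0.50, 0.35, 0.80, 0.20, 0.65]
ses = [0.10, 0.15, 0.30, 0.12, 0.25]
print(i_squared(effects, [s ** 2 for s in ses]), egger_intercept(effects, ses))
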
11.
BMJ ; 369: m1328, 2020 04 07.
Article in English | MEDLINE | ID: covidwho-648504

ABSTRACT

OBJECTIVE: To review and appraise the validity and usefulness of published and preprint reports of prediction models for diagnosing coronavirus disease 2019 (covid-19) in patients with suspected infection, for prognosis of patients with covid-19, and for detecting people in the general population at increased risk of covid-19 infection or being admitted to hospital with the disease. DESIGN: Living systematic review and critical appraisal by the COVID-PRECISE (Precise Risk Estimation to optimise covid-19 Care for Infected or Suspected patients in diverse sEttings) group. DATA SOURCES: PubMed and Embase through Ovid, up to 1 July 2020, supplemented with arXiv, medRxiv, and bioRxiv up to 5 May 2020. STUDY SELECTION: Studies that developed or validated a multivariable covid-19 related prediction model. DATA EXTRACTION: At least two authors independently extracted data using the CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist; risk of bias was assessed using PROBAST (prediction model risk of bias assessment tool). RESULTS: 37 421 titles were screened, and 169 studies describing 232 prediction models were included. The review identified seven models for identifying people at risk in the general population; 118 diagnostic models for detecting covid-19 (75 were based on medical imaging, 10 to diagnose disease severity); and 107 prognostic models for predicting mortality risk, progression to severe disease, intensive care unit admission, ventilation, intubation, or length of hospital stay. The most frequent types of predictors included in the covid-19 prediction models are vital signs, age, comorbidities, and image features. Flu-like symptoms are frequently predictive in diagnostic models, while sex, C reactive protein, and lymphocyte counts are frequent prognostic factors. Reported C index estimates from the strongest form of validation available per model ranged from 0.71 to 0.99 in prediction models for the general population, from 0.65 to more than 0.99 in diagnostic models, and from 0.54 to 0.99 in prognostic models. All models were rated at high or unclear risk of bias, mostly because of non-representative selection of control patients, exclusion of patients who had not experienced the event of interest by the end of the study, high risk of model overfitting, and unclear reporting. Many models did not include a description of the target population (n=27, 12%) or care setting (n=75, 32%), and only 11 (5%) were externally validated by a calibration plot. The Jehi diagnostic model and the 4C mortality score were identified as promising models. CONCLUSION: Prediction models for covid-19 are quickly entering the academic literature to support medical decision making at a time when they are urgently needed. This review indicates that almost all published prediction models are poorly reported and at high risk of bias, such that their reported predictive performance is probably optimistic. However, we have identified two (one diagnostic and one prognostic) promising models that should soon be validated in multiple cohorts, preferably through collaborative efforts and data sharing to also allow an investigation of the stability and heterogeneity in their performance across populations and settings. Details on all reviewed models are publicly available at https://www.covprecise.org/.
Methodological guidance as provided in this paper should be followed because unreliable predictions could cause more harm than benefit in guiding clinical decisions. Finally, prediction model authors should adhere to the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guideline. SYSTEMATIC REVIEW REGISTRATION: Protocol https://osf.io/ehc47/, registration https://osf.io/wy245. READERS' NOTE: This article is a living systematic review that will be updated to reflect emerging evidence. Updates may occur for up to two years from the date of original publication. This version is update 3 of the original article published on 7 April 2020 (BMJ 2020;369:m1328). Previous updates can be found as data supplements (https://www.bmj.com/content/369/bmj.m1328/related#datasupp). When citing this paper please consider adding the update number and date of access for clarity.


Subject(s)
Coronavirus Infections/diagnosis , Models, Theoretical , Pneumonia, Viral/diagnosis , COVID-19 , Coronavirus , Disease Progression , Hospitalization/statistics & numerical data , Humans , Multivariate Analysis , Pandemics , Prognosis
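
The C index ranges quoted above measure discrimination: the probability that a model assigns a higher risk to a person who experiences the outcome than to one who does not. For a binary outcome this can be computed directly from all event/non-event pairs, as in the sketch below (Python, with made-up predictions); with censored survival data Harrell's C would be used instead.

def c_statistic(preds, outcomes):
    # Concordance (C) statistic for binary outcomes: the proportion of
    # event/non-event pairs in which the event received the higher predicted
    # risk (ties count 1/2).
    events = [p for p, y in zip(preds, outcomes) if y == 1]
    nonevents = [p for p, y in zip(preds, outcomes) if y == 0]
    concordant = 0.0
    for pe in events:
        for pn in nonevents:
            if pe > pn:
                concordant += 1.0
            elif pe == pn:
                concordant += 0.5
    return concordant / (len(events) * len(nonevents))

# Illustrative predictions from a hypothetical prognostic model.
print(c_statistic([0.9, 0.7, 0.4, 0.2, 0.1], [1, 1, 0, 1, 0]))   # 0.83
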